    Mining requirements links

    [Context & motivation] Obtaining traceability among requirements and between requirements and other artifacts is an extremely important activity in practice, an interesting area for theoretical study, and a major hurdle in common industrial experience. Substantial effort is spent on establishing and updating such links in any large project - even more so when requirements refer to a product family. [Question/problem] While most research is concerned with ways to reduce the effort needed to establish and maintain traceability links, a different question can also be asked: how is it possible to harness the vast amount of implicit (and tacit) knowledge embedded in already-established links? Is there something to be learned about a specific problem or domain, or about the humans who establish traces, by studying such traces? [Principal ideas/results] In this paper, we present preliminary results from a study applying different machine learning techniques to an industrial case study, and test to what degree common hypotheses hold in our case. [Contribution] Reshaping traceability data into knowledge can contribute to more effective automatic tools to suggest candidates for linking, to inform improvements in writing style, and at the same time provide some insight into both the domain of interest and the actual implementation techniques. © 2011 Springer-Verlag Berlin Heidelberg
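
    As a hedged illustration of the kind of analysis described above (not the authors' actual pipeline), the sketch below trains a simple scikit-learn classifier on a handful of invented, already-established requirement-to-artifact links and uses it to score a new candidate link. The data, the single lexical-similarity feature, and the model choice are assumptions made purely for illustration.

        # Hypothetical sketch: learning from existing trace links to score new candidates.
        # Toy data and a single TF-IDF cosine-similarity feature; illustrative only.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics.pairwise import cosine_similarity

        # (requirement text, artifact text, linked?) triples, invented for the example.
        pairs = [
            ("The pump shall stop on low water level", "stop_pump() called when level < MIN", 1),
            ("The pump shall stop on low water level", "render login page template", 0),
            ("Users shall authenticate with a password", "verify password hash on login", 1),
            ("Users shall authenticate with a password", "pump speed controller loop", 0),
        ]

        vec = TfidfVectorizer().fit([r for r, a, _ in pairs] + [a for r, a, _ in pairs])

        def features(req, art):
            # One feature: lexical similarity between requirement and artifact text.
            return [cosine_similarity(vec.transform([req]), vec.transform([art]))[0, 0]]

        model = LogisticRegression().fit([features(r, a) for r, a, _ in pairs],
                                         [label for _, _, label in pairs])

        # Probability that a new candidate pair should be linked.
        print(model.predict_proba([features("The pump shall stop on low water level",
                                            "low water sensor triggers pump shutdown")])[0, 1])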

    On the interplay between consistency, completeness, and correctness in requirements evolution

    The initial expression of requirements for a computer-based system is often informal and possibly vague. Requirements engineers need to examine this often incomplete and inconsistent brief expression of needs. Based on the available knowledge and expertise, assumptions are made and conclusions are deduced to transform this 'rough sketch' into more complete, consistent, and hence correct requirements. This paper addresses the question of how to characterize these properties in an evolutionary framework, and what relationships link these properties to a customer's view of correctness. Moreover, we describe in rigorous terms the different kinds of validation checks that must be performed on different parts of a requirements specification in order to ensure that errors (i.e. cases of inconsistency and incompleteness) are detected and marked as such, leading to better quality requirements. © 2003 Elsevier B.V. All rights reserved
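
    As a hedged, toy-level companion to the abstract above, the sketch below shows one crude way such checks can be mechanized: an assumed glossary-coverage check stands in for an incompleteness check (undefined terms), and a naive pattern-based comparison stands in for an inconsistency check (conflicting timing bounds). The glossary, requirements, and rules are all invented and are far simpler than the paper's formal validation checks.

        # Toy stand-ins for incompleteness and inconsistency checks over requirements text.
        import re

        glossary = {"pump", "operator", "alarm"}                      # assumed domain glossary
        requirements = {
            "R1": "The pump shall raise an alarm within 2 seconds.",
            "R2": "The pump shall raise an alarm within 5 seconds.",  # conflicts with R1
            "R3": "The gateway shall log every command.",             # 'gateway' is undefined
        }
        filler = {"the", "shall", "an", "a", "every", "within", "raise", "log", "seconds", "command"}

        def undefined_terms(text):
            # Incompleteness proxy: content words not covered by the glossary.
            return set(re.findall(r"[a-z]+", text.lower())) - glossary - filler

        def timing_conflicts(reqs):
            # Inconsistency proxy: same sentence prefix, different numeric bound.
            bounds = {}
            for rid, text in reqs.items():
                m = re.search(r"(.*?)within (\d+) seconds", text)
                if m:
                    bounds.setdefault(m.group(1), []).append((rid, int(m.group(2))))
            return {k: v for k, v in bounds.items() if len({b for _, b in v}) > 1}

        for rid, text in requirements.items():
            if undefined_terms(text):
                print(f"{rid}: possibly incomplete, undefined terms {undefined_terms(text)}")
        print("conflicting bounds:", timing_conflicts(requirements))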

    Conflict characterization and Analysis of Non Functional Requirements: An experimental approach

    Prior studies reveal that conflicts among Non Functional Requirements (NFRs) are not always absolute; they can also be relative, depending on the context of the system being developed. Because existing techniques for managing NFR conflicts focus mainly on cataloguing the interrelationships among various types of NFRs, a technique that manages NFR conflicts with respect to this relative characteristic is needed. This paper presents a novel framework to manage conflicts among NFRs with respect to their relative characteristic. By applying an experimental approach, quantitative evidence of NFR conflicts will be obtained and modeled. NFR metrics and measures will be used in the experiments as parameters to generate the quantitative evidence. This evidence can then allow developers to identify and reason about NFR conflicts. We also provide an example of how this framework could be applied. © 2013 IEEE
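
    The framework itself is only described at a high level above, but the hedged sketch below illustrates one way "quantitative evidence" of a relative conflict might look: assumed measurements of two NFR-related metrics across experimental configurations are correlated, and a strong negative trade-off is flagged as a contextual conflict. The metrics, data, and threshold are invented for illustration, not taken from the paper.

        # Hypothetical sketch: quantifying a relative NFR conflict from experimental data.
        import numpy as np

        # Assumed measurements over six experimental configurations.
        security_level = np.array([1, 2, 3, 4, 5, 6])                 # e.g. stronger checks
        throughput     = np.array([980, 900, 760, 610, 450, 300])     # observed requests/s

        r = np.corrcoef(security_level, throughput)[0, 1]
        print(f"correlation = {r:.2f}")

        # Invented decision rule: a strong negative correlation is treated as evidence
        # that the two NFRs conflict in this context; a weak one suggests no conflict here.
        if r < -0.7:
            print("evidence of a relative conflict between security and performance")
        else:
            print("no strong evidence of conflict in this context")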

    Optimal-constraint lexicons for requirements specifications

    Constrained Natural Languages (CNLs) are becoming an increasingly popular way of writing technical documents such as requirements specifications. This is because CNLs aim to reduce the ambiguity inherent within natural languages, whilst maintaining their readability and expressiveness. The design of existing CNLs appears to be unfocused towards achieving specific quality outcomes, in that the majority of lexical selections have been based upon lexicographer preferences rather than an optimum trade-off between quality factors such as ambiguity, readability, expressiveness, and lexical magnitude. In this paper we introduce the concept of 'replaceability' as a way of identifying the lexical redundancy inherent within a sample of requirements. Our novel and practical approach uses Natural Language Processing (NLP) techniques to enable us to make dynamic trade-offs between quality factors to optimise the resultant CNL. We also challenge the concept of a CNL being a one-dimensional static language, and demonstrate that our optimal-constraint process results in a CNL that can adapt to a changing domain while maintaining its expressiveness. © Springer-Verlag Berlin Heidelberg 2007
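
    As a rough, hedged illustration of the 'replaceability' idea, the sketch below counts how many words in a tiny requirements sample could be replaced by a preferred synonym from a hand-made table, giving a crude lexical-redundancy score. The synonym table, sample, and score are invented stand-ins for the paper's NLP-based trade-off analysis.

        # Toy sketch: estimating lexical redundancy via a hand-made preferred-synonym table.
        import re
        from collections import Counter

        # Assumed mapping from replaceable words to a single preferred lexicon entry.
        preferred = {"halt": "stop", "cease": "stop", "commence": "start", "begin": "start"}

        requirements = [
            "The motor shall halt when the sensor fails.",
            "Logging shall commence at system start-up.",
            "The motor shall cease operation on overload.",
        ]

        tokens = [w for text in requirements for w in re.findall(r"[a-z\-]+", text.lower())]
        replaceable = Counter(w for w in tokens if w in preferred)

        print("replaceable tokens:", dict(replaceable))
        print(f"lexical redundancy = {sum(replaceable.values()) / len(tokens):.0%} of the sample")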

    A Logical Approach to Cooperative Information Systems

    "Cooperative information system management" refers to the capacity of several computing systems to communicate and cooperate in order to acquire, store, manage, and query data and knowledge. Current solutions to the problem of cooperative information management are still far from being satisfactory. In particular, they lack the ability to fully model cooperation among heterogeneous systems according to a declarative style. The use of a logical approach to model all aspects of cooperation seems very promising. In this paper, we define a logical language able to support cooperative queries, updates and update propagation. We model the sources of information as deductive databases, sharing the same logical language to express queries and updates, but containing independent, even if possibly related, data. We use the Obj-U-Datalog (E. Bertino, G. Guerrini, D. Montesi, Toward deductive object databases, Theory and Practice of Object Systems 1 (1) (1995) 19-39) language to model queries and transactions in each source of data. This language is then extended to deal with active rules in the style of Active-U-Datalog (E. Bertino, B. Catania, V. Gervasi, A. Raffaetà, Active-U-Datalog: Integrating active rules in a logical update language, in: B. Freitag, H. Decker, M. Kifer, A. Voronkov (Eds.), LNCS 1472: Transactions and Change in Logic Databases, 1998, pp. 106-132), interpreted according to the PARK semantics proposed by G. Gottlob, G. Moerkotte, V.S. Subrahmanian (The PARK semantics for active rules, in: P.M.G. Apers, M. Bouzeghoub, G. Gardarin (Eds.), LNCS 1057: Proceedings of the Fifth International Conference on Extending Database Technology, 1996, pp. 35-55). By using active rules, a system can efficiently perform update propagation among different databases. The result is a logical environment, integrating active and deductive rules, to perform update propagation in a cooperative framework.
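
    The paper's machinery is logic-based (Obj-U-Datalog queries and transactions plus Active-U-Datalog-style active rules under the PARK semantics); the hedged Python sketch below only mimics the flavour of active-rule update propagation between two cooperating stores and is not the logical language defined in the paper. All names and facts are invented.

        # Toy sketch of active-rule style update propagation between two fact stores.
        db_orders   = set()   # source database
        db_shipping = set()   # cooperating database that must be kept in sync

        def on_insert_order(order_id, customer):
            # Invented active rule: when an order is inserted, propagate a shipping request.
            db_shipping.add(("ship_request", order_id, customer))

        def insert_order(order_id, customer):
            fact = ("order", order_id, customer)
            if fact not in db_orders:
                db_orders.add(fact)
                on_insert_order(order_id, customer)   # fire the rule as part of the update

        insert_order(42, "acme")
        print(db_orders)     # {('order', 42, 'acme')}
        print(db_shipping)   # {('ship_request', 42, 'acme')}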

    Nuclear and Non-Ionizing Energy-Loss for Coulomb Scattered Particles from Low Energy up to Relativistic Regime in Space Radiation Environment

    In the space environment, instruments onboard spacecraft can be affected by displacement damage due to radiation. The differential scattering cross sections for screened nucleus-nucleus interactions (i.e., including the effects of screened Coulomb nuclear fields), nuclear stopping powers, and non-ionizing energy losses are treated from about 50 keV/nucleon up to relativistic energies. Comment: Accepted for publication in the Proceedings of the ICATPP Conference on Cosmic Rays for Particle and Astroparticle Physics, Villa Olmo (Como, Italy), 7-8 October 2010, to be published by World Scientific
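
    For context, the quantity involved is usually written in the standard textbook form below (not reproduced from the paper itself): the non-ionizing energy loss is the part of the energy transfer weighted by the Lindhard partition function, which gives the fraction of each recoil's energy spent in further displacements rather than ionization.

        \mathrm{NIEL}(E) \;=\; \frac{N_A}{A}\int_{T_d}^{T_{\max}} T\, L(T)\, \frac{d\sigma}{dT}(E,T)\, dT

    where N_A is Avogadro's number, A the atomic weight of the target, T the kinetic energy of the recoil nucleus, T_d the displacement threshold energy, T_max the maximum transferable energy, and dσ/dT the differential cross section (here, for screened Coulomb nucleus-nucleus scattering).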

    Supporting Analysts by Dynamic Extraction and Classification of Requirements-Related Knowledge

    © 2019 IEEE. In many software development projects, analysts are required to deal with systems' requirements from unfamiliar domains. Familiarity with the domain is necessary in order to get full leverage from interaction with stakeholders and for extracting relevant information from the existing project documents. Accurate and timely extraction and classification of requirements knowledge support analysts in this challenging scenario. Our approach is to mine real-time interaction records and project documents for the relevant phrasal units about the requirements-related topics being discussed during elicitation. We propose to use both generative and discriminative methods. To extract the relevant terms, we leverage the flexibility and power of Weighted Finite State Transducers (WFSTs) in dynamic modelling of natural language processing tasks. We use an extended version of Support Vector Machines (SVMs) with variable-sized feature vectors to efficiently and dynamically extract and classify requirements-related knowledge from the existing documents. To evaluate the performance of our approach intuitively and quantitatively, we use edit distance and precision/recall metrics. We show in three case studies that the snippets extracted by our method are intuitively relevant and reasonably accurate. Furthermore, we found that statistical and linguistic parameters, such as smoothing methods and word contiguity and order features, can impact the performance of both the extraction and classification tasks.
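
    As a hedged, much-simplified sketch of the classification half of this approach (toy data and off-the-shelf scikit-learn components rather than the authors' WFST and extended-SVM pipeline), the snippet below trains a linear SVM over TF-IDF features of short snippets and reports the precision/recall metrics mentioned above.

        # Simplified sketch: classifying snippets as requirements-related or not.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.metrics import precision_score, recall_score

        snippets = [
            "the system shall encrypt all stored records",
            "users must be able to reset their password",
            "let's schedule the next meeting for tuesday",
            "the response time shall not exceed two seconds",
            "thanks everyone, see you next week",
            "audit logs shall be retained for one year",
        ]
        labels = [1, 1, 0, 1, 0, 1]   # 1 = requirements-related, 0 = small talk

        vec = TfidfVectorizer()
        clf = LinearSVC().fit(vec.fit_transform(snippets), labels)

        # A real evaluation would use held-out data; this only illustrates the metrics.
        pred = clf.predict(vec.transform(snippets))
        print("precision:", precision_score(labels, pred))
        print("recall:   ", recall_score(labels, pred))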

    A linguistic-engineering approach to large-scale requirements management
